YOLOv7 Model


Improved YOLOv7 model for insulator defect detection

Wang, Zhenyue, Yuan, Guowu, Zhou, Hao, Ma, Yi, Ma, Yutang, Chen, Dong

arXiv.org Artificial Intelligence

Insulators are crucial insulation components and structural supports in power grids, playing a vital role in transmission lines. Due to temperature fluctuations, internal stress, or hail damage, insulators are prone to defects. Automatic detection of damaged insulators faces challenges such as diverse defect types, small defect targets, and complex backgrounds and shapes. Most research on insulator defect detection has focused on a single defect type or a specific material. However, the insulators on a grid's transmission lines vary in color and material, and various insulator defects coexist, so existing methods have difficulty meeting practical application requirements: their detection accuracy is low and their mAP_0.5 falls short of what applications demand. This paper proposes an improved YOLOv7 model for multi-type insulator defect detection. First, our model replaces the SPPCSPC module with the RFB module to enhance the network's feature extraction capability. Second, a CA mechanism is introduced into the head to strengthen the network's feature representation and improve detection accuracy. Third, a WIoU loss function is employed to address the low-quality samples that hinder model generalization during training, thereby improving the model's overall performance. The experimental results show improvements across metrics: a 1.6% gain in mAP_0.5, a 1.6% gain in mAP_0.5:0.95, a 1.3% increase in precision, and a 1% increase in recall. Moreover, the model has 3.2 million fewer parameters, reducing computational cost by 2.5 GFLOPS, and single-image detection time improves by 2.81 milliseconds.
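The WIoU loss mentioned in the abstract reweights the plain IoU loss with a distance-based attention term. As a rough illustration only (the paper's exact variant and hyperparameters are not stated here), a minimal sketch of the WIoU v1 form for two axis-aligned boxes might look like this; the function name and box convention are my own:

```python
import math

def wiou_v1(pred, target):
    """Hedged sketch of a Wise-IoU v1 style loss for (x1, y1, x2, y2) boxes.

    The IoU loss is scaled by R_WIoU = exp(centre_distance^2 / enclosing_diag^2),
    where the enclosing-box term is treated as a constant (detached) during
    training in the original formulation.
    """
    px1, py1, px2, py2 = pred
    tx1, ty1, tx2, ty2 = target

    # Plain IoU from intersection and union areas
    iw = max(0.0, min(px2, tx2) - max(px1, tx1))
    ih = max(0.0, min(py2, ty2) - max(py1, ty1))
    inter = iw * ih
    union = (px2 - px1) * (py2 - py1) + (tx2 - tx1) * (ty2 - ty1) - inter
    l_iou = 1.0 - (inter / union if union > 0 else 0.0)

    # Smallest enclosing box (its size is treated as a constant in the paper)
    wg = max(px2, tx2) - min(px1, tx1)
    hg = max(py2, ty2) - min(py1, ty1)

    # Squared distance between box centres
    dx = (px1 + px2) / 2 - (tx1 + tx2) / 2
    dy = (py1 + py2) / 2 - (ty1 + ty2) / 2

    r_wiou = math.exp((dx * dx + dy * dy) / (wg * wg + hg * hg))
    return r_wiou * l_iou

# Identical boxes: IoU = 1, so the loss is 0 regardless of the attention term
print(wiou_v1((0, 0, 10, 10), (0, 0, 10, 10)))  # 0.0
```

For misaligned boxes the attention term exceeds 1, so low-overlap predictions are penalized more than with the plain IoU loss alone.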


Effects of Real-Life Traffic Sign Alteration on YOLOv7 - an Object Recognition Model

Riya, Farhin Farhad, Hoque, Shahinul, Onim, Md Saif Hassan, Michaud, Edward, Begoli, Edmon

arXiv.org Artificial Intelligence

The advancement of image processing has led to the widespread use of Object Recognition (OR) models in applications such as airport security and mail sorting. These models have become essential in demonstrating the capabilities of AI and in supporting vital services like national postal operations. However, the performance of OR models can be impeded by real-life scenarios, such as traffic sign alteration. This research therefore investigates the effects of altered traffic signs on the accuracy and performance of object recognition models. To this end, a publicly available dataset was used to create different types of traffic sign alterations, including changes to size, shape, color, visibility, and angle. The impact of these alterations on the detection and classification abilities of the YOLOv7 (You Only Look Once) model was analyzed. The analysis reveals that the accuracy of object detection models decreases significantly when they are exposed to modified traffic signs. This study highlights the importance of enhancing the robustness of object detection models in real-life scenarios and the need for further investigation in this area to improve their accuracy and reliability.
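Two of the alteration types the study lists, color change and reduced visibility, are easy to picture with a toy example. The sketch below is purely illustrative (the function, parameters, and pixel representation are my own, not the paper's pipeline): it shifts each color channel and optionally blacks out a patch to mimic occlusion.

```python
def alter_sign(img, color_shift=(40, -20, 0), occlude=None):
    """Illustrative traffic-sign alterations: a per-channel color shift
    plus an optional blacked-out patch to reduce visibility.

    `img` is a nested list of (r, g, b) pixels; `occlude` is an optional
    (row0, col0, row1, col1) half-open rectangle. Not the paper's code.
    """
    out = []
    for r, row in enumerate(img):
        new_row = []
        for c, pixel in enumerate(row):
            if occlude and occlude[0] <= r < occlude[2] and occlude[1] <= c < occlude[3]:
                new_row.append((0, 0, 0))  # visibility alteration: black patch
            else:
                # color alteration: shift each channel, clip to [0, 255]
                new_row.append(tuple(
                    max(0, min(255, v + d))
                    for v, d in zip(pixel, color_shift)
                ))
        out.append(new_row)
    return out

# A 2x2 red "sign": shift its color and occlude the top-left pixel
sign = [[(200, 30, 30)] * 2 for _ in range(2)]
altered = alter_sign(sign, occlude=(0, 0, 1, 1))
```

Feeding batches of such altered images through a trained detector and comparing confidence scores against the unaltered originals is the general shape of the evaluation the abstract describes.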


'Tis the Season to Explore our Best Deep Dives

#artificialintelligence

"Heuristics" may sound like a fancy word, but as Holly Emblem explains, it's in fact a clear, streamlined approach to problem-solving. Holly's post provides a clear definition and practical data science use cases for you to consider. A comprehensive look at the latest in object detection: there are deep dives, and then there's Chris Hughes and Bernat Puig Camps' overview of the YOLOv7 model. Don't let its hefty 50-minute reading time scare you: it's engaging and easy to follow, and offers a smooth blend of theory and practice.


Fine Tuning YOLOv7 on Custom Dataset

#artificialintelligence

In this blog post, we will be fine-tuning the YOLOv7 object detection model on a real-world pothole detection dataset. Since its inception, the YOLO family of object detection models has come a long way. YOLOv7 is the most recent addition to this famous anchor-based single-shot family of object detectors, and it brings a bunch of improvements, including state-of-the-art accuracy and speed. Benchmarked on the COCO dataset, the YOLOv7-tiny model achieves more than 35% mAP and the YOLOv7 (normal) model achieves more than 51% mAP. It is equally important that we get good results when fine-tuning such a state-of-the-art model, which is why we will be working with a real-world pothole detection dataset.
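Fine-tuning YOLOv7 on a custom dataset requires annotations in the YOLO label format: one text file per image, one line per object, with a class index followed by the box centre and size normalized to [0, 1]. A small sketch of that conversion step (the function name and box convention are my own, not code from the YOLOv7 repo):

```python
def to_yolo_label(cls_id, box, img_w, img_h):
    """Convert a pixel-space (x1, y1, x2, y2) box into a YOLO-format
    label line: "class x_center y_center width height", all coordinates
    normalized by the image dimensions.
    """
    x1, y1, x2, y2 = box
    xc = (x1 + x2) / 2 / img_w   # normalized box centre x
    yc = (y1 + y2) / 2 / img_h   # normalized box centre y
    w = (x2 - x1) / img_w        # normalized box width
    h = (y2 - y1) / img_h        # normalized box height
    return f"{cls_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# Pothole (class 0) at pixels (100, 200)-(300, 400) in a 640x640 image
print(to_yolo_label(0, (100, 200, 300, 400), 640, 640))
# -> 0 0.312500 0.468750 0.312500 0.312500
```

Each image's label file shares its stem (e.g. `img001.jpg` / `img001.txt`), and a dataset YAML pointing at the train/val image folders ties everything together for training.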


YOLOv7 Paper Explanation: Object Detection and YOLOv7 Pose

#artificialintelligence

YOLOv7 improves speed and accuracy by introducing several architectural reforms. Apart from these architectural modifications, there are several other improvements; go through the YOLO series for detailed information. Like Scaled YOLOv4, YOLOv7 does not use ImageNet pre-trained backbones; rather, the models are trained entirely on the COCO dataset. The similarity is to be expected, because YOLOv7 is written by the same authors as Scaled YOLOv4.